
Conversation

@atheendre130505 (Contributor) commented on Nov 1, 2025

## Fix map_query_sql benchmark duplicate key error

### Description

The `build_keys()` function generated 1000 random keys from the range `0..9999`, which can yield duplicates (a birthday-paradox effect). The `map()` function requires unique keys, so the benchmark failed with: `Execution("map key must be unique, duplicate key found: {key}")`
This fix ensures all generated keys are unique by:
- using a `HashSet` to track seen keys,
- only adding a key to the result if it has not been seen before,
- continuing to generate until exactly 1000 unique keys are produced.
### Which issue does this PR close?

Closes #18421
### Rationale for this change

The benchmark was non-deterministic: it could pass or fail depending on the random keys generated. Drawing 1000 keys from a range of 9,999 values makes a collision all but certain: the expected number of duplicate pairs is roughly 1000 · 999 / (2 · 9999) ≈ 50, and the probability that all 1000 keys are distinct is on the order of e⁻⁵⁰. This change ensures uniqueness so the benchmark consistently succeeds and actually measures map function performance rather than error handling.
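The collision claim can be checked directly. Below is a minimal sketch (the function name `prob_all_distinct` is illustrative, not from the PR) that computes the exact probability that 1000 uniform draws from `0..9999` are all distinct:

```rust
/// Exact probability that `n` independent uniform draws from a domain of
/// size `d` are all distinct: prod_{i=0}^{n-1} (1 - i/d).
fn prob_all_distinct(n: u32, d: f64) -> f64 {
    (0..n).map(|i| 1.0 - f64::from(i) / d).product()
}

fn main() {
    // For the benchmark's parameters the probability is vanishingly small
    // (well below 1e-20), so a duplicate key was essentially guaranteed
    // on every run of the old benchmark.
    let p = prob_all_distinct(1000, 9999.0);
    println!("P(all 1000 keys distinct) = {p:e}");
}
```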
### What changes are included in this PR?

- Added the `use std::collections::HashSet;` import.
- Modified `build_keys()` to:
  - track generated keys in a `HashSet`,
  - only add keys that have not been seen before,
  - keep generating until exactly 1000 unique keys are produced.
File changed: `datafusion/core/benches/map_query_sql.rs`

Code changes:
- added the `HashSet` import at the top of the file,
- replaced the simple loop in `build_keys()` with uniqueness-checking logic.
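The uniqueness-checking loop described above can be sketched as follows. This is a minimal, dependency-free illustration rather than the PR's literal diff: `Lcg` is an inline stand-in for the benchmark's `rand`-crate RNG, and `build_keys` takes `n` as a parameter here for testability (the real function is hard-coded to 1000 keys):

```rust
use std::collections::HashSet;

// Tiny linear congruential generator standing in for the rand crate's
// `rng.random_range(0..9999)`, so this sketch compiles without
// external dependencies.
struct Lcg(u64);

impl Lcg {
    fn next_in(&mut self, bound: u64) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 33) % bound
    }
}

/// Generate exactly `n` unique string keys drawn from 0..9999.
fn build_keys(n: usize) -> Vec<String> {
    let mut rng = Lcg(42);
    let mut keys = Vec::with_capacity(n);
    let mut seen = HashSet::with_capacity(n);
    while keys.len() < n {
        let key = rng.next_in(9999).to_string();
        // `insert` returns true only if the key was not already present,
        // so duplicates are silently retried.
        if seen.insert(key.clone()) {
            keys.push(key);
        }
    }
    keys
}

fn main() {
    let keys = build_keys(1000);
    assert_eq!(keys.len(), 1000);
    let unique: HashSet<&String> = keys.iter().collect();
    assert_eq!(unique.len(), 1000);
    println!("generated {} unique keys", keys.len());
}
```

The retry loop terminates quickly because the domain (9,999 values) is much larger than the number of keys requested (1,000).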
### Are these changes tested?

The fix was verified by:
- logic review: the `HashSet` approach guarantees uniqueness,
- code review: the changes follow Rust best practices,
- no linter errors.

The benchmark itself serves as the test: running `cargo bench -p datafusion --bench map_query_sql` should now complete without errors. Before this fix, the benchmark failed with duplicate-key errors in a significant fraction of runs.
### Are there any user-facing changes?

No. This is an internal benchmark fix that makes the map_query_sql benchmark run reliably; it does not affect the public API or any runtime behavior of DataFusion.

The github-actions bot added the `core` (Core DataFusion crate) label on Nov 1, 2025.
@Omega359 (Contributor) commented on Nov 2, 2025

LGTM.

@Jefffrey (Contributor) left a review comment


Looks like you need to resolve some conflicts.

Diff context in `build_keys()`:

```rust
let mut keys = vec![];
for _ in 0..1000 {
    keys.push(rng.random_range(0..9999).to_string());
// ...
let mut seen = HashSet::with_capacity(1000);
```

We could also make keys a HashSet and just keep inserting into it until it reaches 1000 instead of having both keys and seen
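That suggestion can be sketched like this (again with an inline `Lcg` as a stand-in for the `rand`-crate RNG used by the real benchmark): since a `HashSet` rejects duplicates on its own, the separate `seen` tracker disappears.

```rust
use std::collections::HashSet;

// Dependency-free RNG stand-in; the actual benchmark uses the rand crate.
struct Lcg(u64);

impl Lcg {
    fn next_in(&mut self, bound: u64) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 33) % bound
    }
}

/// Insert into a HashSet until it holds `n` keys; the set itself
/// deduplicates, so no second collection is needed.
fn build_keys(n: usize) -> Vec<String> {
    let mut rng = Lcg(7);
    let mut keys = HashSet::with_capacity(n);
    while keys.len() < n {
        keys.insert(rng.next_in(9999).to_string());
    }
    keys.into_iter().collect()
}

fn main() {
    let keys = build_keys(1000);
    assert_eq!(keys.len(), 1000);
}
```

One trade-off: collecting from a `HashSet` yields the keys in arbitrary order, which is fine here since the benchmark only needs uniqueness, not a particular ordering.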

@Jefffrey (Contributor) commented:

Took the liberty of pushing some commits to get this PR over the line

@Jefffrey added this pull request to the merge queue on Nov 17, 2025.
@Jefffrey (Contributor) commented:

Thanks @atheendre130505 for initiating this

Merged via the queue into apache:main with commit 5d5a276 Nov 17, 2025
28 checks passed
logan-keede pushed a commit to logan-keede/datafusion that referenced this pull request Nov 23, 2025

Labels: core (Core DataFusion crate)

Linked issue: map_query_sql benchmark failing

3 participants